The Polygraph Place

  Polygraph Place Bulletin Board
  Professional Issues - Private Forum for Examiners ONLY
  Questions on validated techniques (Page 2)

Author Topic:   Questions on validated techniques
Dan Mangan
Member
posted 05-02-2007 04:20 AM
Great work, stat!

stat
Member
posted 05-02-2007 07:30 AM
Barry said: "One of the findings in the (DoDPI) TES studies was that a person could lie to R1 but go DI on R2 (to which he was truthful). Why? We don't know. They could catch the deceptive, but they couldn't necessarily say which RQ the person was lying about. Why doesn't that carry over into a single-issue CQT?"

I have run into this phenomenon on many occasions myself. Super point. I can't help but ponder how, or if, the anomaly works the same on controls, as an examinee might not necessarily discriminate between R's and C's. The whole thing is crazy and can affect interrogative confidence----the most valuable (IMO) and grossly underrated component of a poly test.
When I describe "Anticlimactic Dampening" in lectures and training to neophytes/newbies, I state that it's like a guy who was in an awful car accident--his leg is bent backwards and he's writhing in pain. The doc asks what's wrong with him and he cries "my leg!" Does that mean he DOESN'T have broken ribs? Of course not. People seem to grasp that highly simplified (and perhaps inaccurate) description. If I don't use such anecdotes, my customers refuse to understand multi-issue testing results because they defy common sense. Perhaps my heuristic approach is for my own benefit also------I'm not a fan of such limitations.

[This message has been edited by stat (edited 05-02-2007).]

stat
Member
posted 05-02-2007 08:17 AM
Assuming that there is a degree of truth in my analogy, perhaps the polygraph works not on FFF, but instead on thresholds of emotional pain. It's a stretch for sure, but perhaps a person arouses to adjacent questions because, despite being truthful to them, they add emotional insult to injury. Perhaps I need more coffee to see my own folly here.
Maybe controls should contain some form of insult, like: "Since before last December, have you done something stupid that, if known today, would make people think you're a moron?" [crooked smile]

[This message has been edited by stat (edited 05-02-2007).]

Bill2E
Member
posted 05-03-2007 12:42 AM
Getting back to acceptable formats, it looks like Backster is still in the running for top. You rotate the relevants, you have a control prior to a relevant in all circumstances, and you deal with only one issue, period. The data supports this type of examination. The difference is in the scoring, and that can be adjusted if necessary. If you have reaction to the relevant and the control, my thought is you have problems with the answer to the relevant and you are seeing reaction; it could be your question is off topic a bit, or your control is encompassing of the relevant and competing with it. There are of course other explanations; these are just basic and simple items to look at.
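As a rough sketch of the format described above -- a control before every relevant, relevants rotated across charts, one issue only -- something like the following could generate the question sequences. The question labels and the rotation scheme here are illustrative assumptions, not the official Backster numbering:

# Illustrative sketch only: builds single-issue question sequences in which every
# relevant question (RQ) is preceded by a comparison/control question (CQ), and the
# relevants are rotated across charts. Labels and rotation scheme are assumptions
# for illustration, not official Backster numbering.

from itertools import cycle

def build_charts(relevants, controls, n_charts=3):
    """Return one question sequence per chart, pairing each RQ with a preceding CQ
    and rotating the RQ order from chart to chart."""
    charts = []
    for chart_idx in range(n_charts):
        k = chart_idx % len(relevants)
        rotated = relevants[k:] + relevants[:k]     # rotate the relevants this chart
        cq_iter = cycle(controls)
        sequence = ["Irrelevant", "Sacrifice relevant"]
        for rq in rotated:
            sequence.append(next(cq_iter))          # a control always precedes its relevant
            sequence.append(rq)
        charts.append(sequence)
    return charts

if __name__ == "__main__":
    rqs = ["R5: Did you take the money?", "R7: Did you take that deposit money?"]
    cqs = ["C4: Before age 25, did you ever steal anything?",
           "C6: Prior to last year, did you ever lie to someone who trusted you?"]
    for i, seq in enumerate(build_charts(rqs, cqs), 1):
        print(f"Chart {i}: {seq}")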

Barry C
Member
posted 05-03-2007 03:59 PM
Yes, I believe validity studies would show the Backster test (single-issue) to be valid. His scoring system is another story. It's probably valid; it's just not going to give you the best accuracy.

quote:
If you have reaction to the relevant and the control, my thought is you have problems with the answer to the relevant and you are seeing reaction; it could be your question is off topic a bit, or your control is encompassing of the relevant and competing with it. There are of course other explanations; these are just basic and simple items to look at.

You sound like a Backster grad? The research out of Utah (and Dr. Kircher showed the slides at AAPP) shows that equal reactions to the RQ and CQ are indicative of truthfulness. In other words, if you lay the RQ over the CQ, they often look all too similar. The Utah "fix" is to bias the test in favor of the truthful to make up for that difference, as has been mentioned here briefly. Your comparison is always competing with the RQ - that's the point. Not expecting to see a reaction to the RQ (in the truthful) is not real life. In the end (even with the Utah test), the test is still biased against the truthful.

I know you don't like the term "biased," Ray, as it's got a scientific meaning, but it's got a common (layman's) meaning too. We could use its synonym, "prejudice," but that has a legal meaning, which would get the lawyers mad at me, so I can't use that one either.

rnelson
Member
posted 05-03-2007 05:11 PM
I hear you, Barry, and I'm not stressing out over the word. I just thought that someone should point out that if we're not thoughtful about the subtle but important inaccuracy of usage, then that will tend to alter the long-term effectiveness or productivity of the discussion. You're right about the word "prejudice."

It might make more sense to simply discuss the differences in sensitivity (ability to detect deception), and specificity (ability to accurately reject truthful cases from further concern).
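As a minimal sketch of those two terms as used here (the counts below are hypothetical, purely to show the arithmetic):

# Sensitivity = deceptive cases correctly called DI; specificity = truthful cases
# correctly called NDI. Counts are hypothetical, for illustration only.

def sensitivity(true_di, missed_di):
    """Proportion of deceptive examinees the test flags (DI)."""
    return true_di / (true_di + missed_di)

def specificity(true_ndi, false_di):
    """Proportion of truthful examinees the test clears (NDI)."""
    return true_ndi / (true_ndi + false_di)

print(sensitivity(true_di=90, missed_di=10))   # 0.90
print(specificity(true_ndi=80, false_di=20))   # 0.80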

I think Skip has a sound point here, and it's the same one I was trying to make about the lack of real differences in construct validity.

Your point about scoring differences is really a difference in what we consider "valid" (read: good enough) in terms of operational efficiency. It's really in post-hoc scoring where most differences exist - the reaction features we think are indicative of deception (and what research we think established that understanding), and how we do the math.

I'm not sure I agree that the Utah test fixes the bias you describe, as the Utah test presently uses ratios that do not seem to allow a score other than zero when the RQ and CQ reactions are approximately equal. I do think it's very rare to see reactions only to the RQs or only to the CQs, and I do not agree with the attribution of some defect to the RQ or CQ when that occurs.
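To illustrate the kind of ratio-to-score mapping being described -- not the published Utah rules; the band edges and score magnitudes are assumptions for illustration only -- a spot score can be forced to zero whenever the CQ and RQ reactions are of roughly equal magnitude:

# Illustrative only -- NOT the published Utah scoring rules. This just shows a
# ratio-to-score mapping in which approximately equal CQ and RQ reactions fall
# inside a neutral band and score zero. The band edges (1/1.5 and 1.5) and the
# 3.0 threshold are placeholders.

def spot_score(cq_amplitude, rq_amplitude, neutral_band=1.5):
    """Return a spot score from a CQ/RQ amplitude ratio."""
    ratio = cq_amplitude / rq_amplitude
    if 1.0 / neutral_band <= ratio <= neutral_band:
        return 0                                   # reactions approximately equal: no score
    if ratio > neutral_band:
        return +1 if ratio < 3.0 else +2           # CQ clearly larger: truthful direction
    return -1 if ratio > 1.0 / 3.0 else -2         # RQ clearly larger: deceptive direction

print(spot_score(cq_amplitude=1.1, rq_amplitude=1.0))  # 0 (approximately equal)
print(spot_score(cq_amplitude=2.4, rq_amplitude=1.0))  # +1
print(spot_score(cq_amplitude=1.0, rq_amplitude=4.0))  # -2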

I also think that a lot of researchers and test designers would be very uncomfortable with the idea of mid-stream changes to test stimulus questions, based on our assumptions about the causes of reactions during the test. That is the impression I get regarding the Backster procedures. I didn't go to Backster's school, but I heard him talk at APA, so please correct me if I am misunderstanding the idea that he might endorse adjustments to questions during the test.

Imagine trying to interpret the results of an IQ or personality test if we started altering the stimulus during the test because we thought someone's scores were too high, or not high enough. The result would be a test result that is perhaps not completely useless, and perhaps even very well informed clinically. But it would be a clinical impression, not an empirical result. Replication would be difficult to achieve, and the overall validity of the test would be limited by reliability flaws.

So we perhaps have to decide: do we want the polygraph to be an impressionistic (non-measurement, non-mathematical, and non-scientific) test, or a science-based test? If we choose the route of science, then we have to endorse scientific principles, and a little bit of math. If not, if we want a test devoid of the confines and demands of math and scientific testing principles, then we may as well stop talking about things like validity.

One of the biggest empirical challenges facing our profession, in my humble opinion, is to outgrow the mode of "schools of thought" regarding scoring rules and move increasingly towards scoring rules that are based in sound mathematical and testing principles and supported by data. Then we have to learn to assimilate new information, including perhaps new methods, into existing knowledge and methods, without falling prey to ego-driven (or worse: proprietary-profit-driven) schisms based on misguided loyalty to a "school of thought."

Consider this: if some research-oriented nurse at The Children's Hospital conducts a study that validates the premise that a new procedure results in a significant reduction in bed-sores or catheter infections, Lutheran, General, and St. Luke hospitals will not refuse the benefits of the new and improved technique simply because it came from another hospital chain. Neither are they reluctant to adjust their procedures simply because the new procedure was not endorsed by their original trainers at the time of some training that occurred way back when.


r


------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


Barry C
Member
posted 05-03-2007 05:49 PM
quote:
I'm not sure I agree that the Utah test fixes the bias you describe, as the Utah test presently uses ratios that do not seem to allow a score other than zero when the RQ and CQ reactions are approximately equal.

The reason I put the word "fix" in quotes is because it's not a literal "fix," but rather what the Utah test does to help even the playing field, so to speak. It's the same thing we've said here before, and Skip has been preaching it hard for a while: We don't have to do anything to get the liars to go DI, but we do have to work at getting the truthful to NDI. We do that with a good pre-test, an optimally sequenced question format, and a sound scoring system, among other things.

quote:
I also think that a lot of researchers and test designers would be very uncomfortable with the idea of mid-stream changes to test stimulus questions, based on our assumptions about the causes of reactions during the test. That is the impression I get regarding the Backster procedures. I didn't go to Backster's school, but I heard him talk at APA, so please correct me if I am misunderstanding the idea that he might endorse adjustments to questions during the test.

Jack Consigli taught the Backster techniques at my polygraph school, and that's what I took away from it. Matte taught his techniques, and he said the same thing. His book (the big one) has quite a list of how to make necessary adjustments mid-stream. If there's a reaction to the RQ and the CQ, they assume the CQ is too strong and weaken it - or something like that. (It's been a while.) So, yes, I think you're right, and I'd appreciate knowing if I've got it wrong.

quote:
Consider this: if some research-oriented nurse at The Children's Hospital conducts a study that validates the premise that a new procedure results in a significant reduction in bed-sores or catheter infections, Lutheran, General, and St. Luke hospitals will not refuse the benefits of the new and improved technique simply because it came from another hospital chain. Neither are they reluctant to adjust their procedures simply because the new procedure was not endorsed by their original trainers at the time of some training that occurred way back when.

Amen. That's beautiful. Don't be surprised if it shows up in some of my future presentations! I'll cite you, of course.

Barry C
Member
posted 05-03-2007 05:54 PM
Ray,

I forgot to mention. Dr. Kircher said the CQs go first as they will get the greater reaction. He also said an SR is probably a good idea in that it helps habituate the examinee to the relevant issue (a plus to the truthful). He admits there is no data he is aware of that would support that position, though. Did you look at SRs when you looked at RQs? That is, do they actually get the strongest responses?

Of course, the SR comes before any CQs, which leads to additional questions for those of us who have no cure for this disease.

I had another point that has completely slipped my mind, but it's been a long day.

rnelson
Member
posted 05-03-2007 09:12 PM
Barry,

Thanks for the article. I'll read it later tonight when I have a moment.

I understand the no-cure problem - this is a magnificent obsession. But it's springtime now, and that means I need to clear space in my head to think about some really important things like motorcycles, the water-fording abilities of my old truck, and how I'm going to coerce my son to climb a few more mountains.

On habituation: of course it's habituation - Kircher simply uses garden-variety psychobabble, instead of our more esoteric polybabble, when describing the possible role of the SR.

We bootstrapped OSS-3 on the digitized archival dataset that OSS-1 and OSS-2 were trained on. So, I don't have access to the charts or SR measurements for the training data or validation data. I have the SR data for the mixed-format cases and the LEPET and PCSOT datasets, but I'm not sure that will really answer any questions, as there is a lot of variability in the numbers of RQs, number of charts, and whether or not there was an SR in the test. It would require a bootstrap ANOVA to get to the question, and the likelihood of finding something significant is kind of a shot in the dark - so, probably not worth the intense hassle of setting up the code to do all that resampling.

Keep in mind that the SR could help habituation to the RQ whether or not it produces the strongest reaction. So, it would require a control dataset to evaluate properly. As far as I know, only the Army-MGQT technique, and earlier GQTs, don't use SRs, and there are other important differences in those tests, compared with other techniques. So, we might never know.

We recently did a bootstrap ANOVA of the habituation between RQs, using 1000 resample sets of (n=292), with the hypothesis that R5 might produce more differential reactivity than R7 and R10, but failed to find any significant difference. It's a common assumption, so we are left to wonder, because we examiners do sometimes (anecdotally) see increasing or decreasing trends in response to successive RQs and successive charts. But it might be our human desire for sensible answers that generalizes a few anecdotes to a spurious assumption that might not be consistent enough to be supportable by data.
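For anyone curious what that kind of bootstrap looks like, here is a minimal sketch with simulated data standing in for the archival cases (the per-case reactivity values and column layout are assumptions for illustration): resample the cases with replacement, compute a one-way F statistic across R5, R7, and R10 on each resample, and inspect how stable any between-question difference is across the bootstrap distribution of F.

# Minimal sketch of a case-level bootstrap of a one-way F statistic.
# Data is simulated; in practice the rows would be the archival cases.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_cases = 292
data = np.column_stack([
    rng.normal(0.0, 1.0, n_cases),   # simulated differential reactivity at R5
    rng.normal(0.0, 1.0, n_cases),   # R7
    rng.normal(0.0, 1.0, n_cases),   # R10
])

observed_f, _ = stats.f_oneway(data[:, 0], data[:, 1], data[:, 2])

boot_f = []
for _ in range(1000):
    idx = rng.integers(0, n_cases, n_cases)      # resample cases with replacement
    sample = data[idx]
    f, _ = stats.f_oneway(sample[:, 0], sample[:, 1], sample[:, 2])
    boot_f.append(f)

print(f"observed F = {observed_f:.3f}")
print(f"bootstrap F: mean = {np.mean(boot_f):.3f}, 95th pct = {np.percentile(boot_f, 95):.3f}")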


r


------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


rnelson
Member
posted 05-03-2007 09:12 PM
duplicated post - sorry

Impatiently clicking away, while cursing at Sprint's painfully slow network.

r

[This message has been edited by rnelson (edited 05-03-2007).]

stat
Member
posted 05-03-2007 09:17 PM
On a peripheral note, Barry, you said:

"If there's a reaction to the RQ and the CQ, they assume the CQ is too strong and weaken it - or something like that. (It's been a while.) So, yes, I think you're right, and I'd appreciate knowing if I've got it wrong."
Given that you admitted that it's "been a while," I was just curious as to how a person can "weaken" a control---did you mean literally, or by some form of verbal remarks? It caught my attention and I wondered if there is a surefire way of doing so ethically.

Barry C
Member
posted 05-04-2007 02:52 PM
Here you go, stat - right from "The Backster Technique" as presented by Jack Consigli.

"'Tri-Zone' Reaction Combinations":

"INDICATION: D3: Presence of response to one or both of green zone questions in addition to the red zone question indicates a serious green zone question defect."

Translation: Reactions to the CQs on either side of an RQ that also shows a response mean your CQs are too hot.

"REMEDY: D4: Reduce intensity of green zone questions by altering subject age categories or changing scope of green zone questions."

Matte adopted a very similar reaction combination chart for his tests.

For the non-Backster people, the color codes (for the zones) are as follows:

Green = CQ

Red = RQ

Black = OI

There are others, but you'll see those the most.

Additionally, in 1983 Backster came up with a paper he titled "Comparative Technique Question Problem Areas."

Here are some of the problems:

RQs:

Mixing target issues

Too many relevant issues

Lack of Sr

CQs:

Lack of time bar

Overpowering CQ

Reviewed answer reversal

Guilt-complex "non-lie"

Misplaced pseudo-relevant

OIs:

Lack of symptomatic question

Single symptomatic question usage

NQs / IQs:

Mid chart location

Stigmatic quality

Excessive routine usage

I just listed some of them. Just because they are here doesn't mean they are right or wrong. If Cleve's changed his position on any of these, please speak up if you're in the know.

I forgot to add he wrote (in the explanation section for these issues), "Validity of the listed problem areas based upon tentative acceptance of the 'Flow of psychological set.'"

[This message has been edited by Barry C (edited 05-04-2007).]

Bill2E
Member
posted 05-09-2007 09:01 AM
I don't see anyone discussing the pretest here. That is where you set the subject on the control or the relevant issue. If we do a proper pretest, the charts seem to be clear. Yes, the polygraph is biased against the truthful IF we don't set the CQ's in the pretest. We can debate scoring rules all day, but if we have not done a good job in the pretest, scoring rules don't help much. I agree that we will see less reaction to the CQ's on truthful persons and more reaction to relevants on untruthful persons regardless of the pretest; however, we will not see significant reaction to relevants on NDI subjects IF we set the controls from the beginning of the pretest. That is an examiner problem, not a problem with the test format.

Barry C
Member
posted 05-09-2007 12:32 PM
quote:
Yes, the polygraph is biased against the truthful IF we don't set the CQ's in the pretest.

No, it's biased even if we do a proper pre-test, and that's the point.

quote:
we will not see significant reaction to relevants on NDI subjects IF we set the controls from the beginning of the pretest. That is an examiner problem, not a problem with the test format.

No, we do tend to see equal reactions to both the CQs and the RQs in the truthful, and that's one of the problems. Dr. Kircher showed a slide at AAPP to help make that point. Have you read the studies on the positions of RQs and CQs in tests? You'll see that if the RQ precedes the CQ the scores are more negative than if the question sequence is reversed. The pretest has nothing to do with that. The order of the questions helps to get the CQ reactions to be the more salient. Having asymmetrical cut-off scores helps as well. Again, that has nothing to do with the pre-test.
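As a quick illustration of what asymmetrical cut-off scores do in practice -- the +4/-6 values below are placeholders, not any technique's published cutoffs -- the total needed to call a test NDI is smaller in magnitude than the total needed to call it DI, which shifts decisions toward the truthful:

# Illustrative sketch of asymmetrical cutoffs; the +4 / -6 values are placeholders.

def decision(total_score, ndi_cutoff=+4, di_cutoff=-6):
    """Classify a total chart score using asymmetrical cutoffs."""
    if total_score >= ndi_cutoff:
        return "NDI"          # truthful call reached with a smaller positive total
    if total_score <= di_cutoff:
        return "DI"           # deceptive call requires a larger negative total
    return "INC"              # everything in between stays inconclusive

for score in (+4, -4, -6, 0):
    print(score, decision(score))   # 4 NDI, -4 INC, -6 DI, 0 INC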

Yes, you are right that a bad pre-test probably can't be corrected by other means, and a proper pre-test is a crucial part of a valid test. If you're depending on that alone, though, your accuracy will suffer.

Bill2E
Member
posted 05-10-2007 06:01 AM
I have no argument with CQ's then RQ's in sequence; it makes sense and is backed up with research that is valid. Again, if we set the CQ's from the onset of the pretest, we have fewer problems. That is my point. We will never be 100%, but we can get closer to 100% if the entire exam is properly conducted, and especially the pretest portion.

Barry C
Member
posted 05-10-2007 07:58 AM
Yes, I think we'd all be in agreement here. (And, we'll all make Skip happy knowing he's been effective in making the point!) If you mess up the pre-test, you mess up the whole test. If you do it correctly, then yes, you're going to be closer to that 100% goal. Keeping these other principles in mind will get you even closer to that goal.
